Linear Transformations

Linear Mapping

Definition: Let $V$ and $W$ be linear spaces over the same field $F$. A mapping $\mathcal{T}: V \rightarrow W$ is called a linear mapping if it satisfies

$$\mathcal{T}(ax+by) = a\mathcal{T}(x) + b\mathcal{T}(y) \quad \forall x,y \in V, \ \forall a,b \in F$$

Here $V$ is called the domain of $\mathcal{T}$ and $W$ is called the codomain of $\mathcal{T}$.


Example: Let $V = W = \{\text{polynomials of degree less than } n \text{ in } s\}$. Show that $\mathcal{T} = \frac{d}{ds}$ is linear.

Solution: Let $p,q \in V$ and $\alpha_1,\alpha_2 \in F$ with
$$p(s) = \sum_{i=0}^{n-1} a_i s^i \quad \text{and} \quad q(s) = \sum_{i=0}^{n-1} b_i s^i.$$
Then
$$\alpha_1 p(s) + \alpha_2 q(s) = \sum_{i=0}^{n-1} (\alpha_1 a_i + \alpha_2 b_i) s^i$$
$$\frac{d}{ds}\big(\alpha_1 p(s) + \alpha_2 q(s)\big) = \sum_{i=1}^{n-1} (\alpha_1 a_i + \alpha_2 b_i)\, i s^{i-1} = \alpha_1 \sum_{i=1}^{n-1} i a_i s^{i-1} + \alpha_2 \sum_{i=1}^{n-1} i b_i s^{i-1} = \alpha_1 \frac{dp}{ds} + \alpha_2 \frac{dq}{ds}$$
$$\mathcal{T}(\alpha_1 p + \alpha_2 q) = \frac{d}{ds}(\alpha_1 p + \alpha_2 q) = \alpha_1 \frac{dp}{ds} + \alpha_2 \frac{dq}{ds} = \alpha_1 \mathcal{T}(p) + \alpha_2 \mathcal{T}(q) \ \blacksquare$$
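This linearity can also be checked numerically by working with coefficient vectors. The sketch below (an illustration, with an arbitrary choice of $n = 4$ and two test polynomials) represents $p$ by $[a_0, \dots, a_{n-1}]$ and implements $\frac{d}{ds}$ as a map on coefficients:

```python
import numpy as np

def deriv(p):
    """Differentiate a polynomial given by its coefficient vector
    p = [a_0, a_1, ..., a_{n-1}] (so p(s) = sum a_i s^i);
    the result is padded back to length n so it stays in V."""
    n = len(p)
    dp = np.array([i * p[i] for i in range(1, n)], dtype=float)
    return np.append(dp, 0.0)

# two polynomials of degree < 4 and two scalars
p = np.array([1.0, 2.0, 0.0, 3.0])   # 1 + 2s + 3s^3
q = np.array([0.0, 1.0, 4.0, 0.0])   # s + 4s^2
a1, a2 = 2.5, -1.0

lhs = deriv(a1 * p + a2 * q)          # T(a1 p + a2 q)
rhs = a1 * deriv(p) + a2 * deriv(q)   # a1 T(p) + a2 T(q)
print(np.allclose(lhs, rhs))          # True
```

Of course, a numerical check on a few inputs is only a sanity check; the proof above is what establishes linearity for all $p, q, \alpha_1, \alpha_2$.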


Example: Let V=W=R2V = W = \mathbb{R^2}. Let A\mathcal{A} be defined as,

A=[α1α1+α2]where x=[α1α2]\mathcal{A} = \begin{bmatrix} \alpha_1 \\ \alpha_1 + \alpha_2 \end{bmatrix} \text{where } x = \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix}

Solution: Let $a,b \in F$ and $x_1,x_2 \in V$ with $x_1 = \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix}$ and $x_2 = \begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix}$. Then,
$$\mathcal{A}(ax_1 + bx_2) = \mathcal{A}\left(\begin{bmatrix} a\alpha_1 + b\beta_1 \\ a\alpha_2 + b\beta_2 \end{bmatrix}\right) = \begin{bmatrix} a\alpha_1 + b\beta_1 \\ a\alpha_1 + a\alpha_2 + b\beta_1 + b\beta_2 \end{bmatrix} = a\begin{bmatrix} \alpha_1 \\ \alpha_1 + \alpha_2 \end{bmatrix} + b\begin{bmatrix} \beta_1 \\ \beta_1 + \beta_2 \end{bmatrix} = a\mathcal{A}(x_1) + b\mathcal{A}(x_2) \ \blacksquare$$
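As a quick numerical illustration (with arbitrary random test vectors, not part of the proof), the same identity can be checked in NumPy:

```python
import numpy as np

def A(x):
    # the map x = (α1, α2) ↦ (α1, α1 + α2)
    return np.array([x[0], x[0] + x[1]])

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=2), rng.normal(size=2)
a, b = 3.0, -2.0
# A(a x1 + b x2) should agree with a A(x1) + b A(x2)
print(np.allclose(A(a * x1 + b * x2), a * A(x1) + b * A(x2)))  # True
```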


Example: Let $V = W = \mathbb{R}$. Is $\mathcal{A}x = 1-x$ linear or not?

Solution: Let $a,b \in F$ and $x_1,x_2 \in V$. Then,

$$\begin{align*} \mathcal{A}(ax_1 + bx_2) &\stackrel{?}{=} a\mathcal{A}(x_1) + b\mathcal{A}(x_2) \\ 1 - (ax_1 + bx_2) &\stackrel{?}{=} a(1-x_1) + b(1-x_2) \\ 1 - ax_1 - bx_2 &\stackrel{?}{=} a - ax_1 + b - bx_2 \\ 1 &\stackrel{?}{=} a + b \end{align*}$$

Equality fails for any $a, b$ with $a + b \neq 1$; hence $\mathcal{A}$ is not linear. $\blacksquare$
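A single counterexample suffices, and one is easy to exhibit numerically (the particular values $a = b = 1$, $x_1 = 0.5$, $x_2 = 0.25$ are an arbitrary choice):

```python
def A(x):
    # the affine map x ↦ 1 - x
    return 1 - x

a, b, x1, x2 = 1.0, 1.0, 0.5, 0.25
lhs = A(a * x1 + b * x2)        # 1 - (x1 + x2) = 0.25
rhs = a * A(x1) + b * A(x2)     # (1 - x1) + (1 - x2) = 1.25
print(lhs == rhs)  # False: the constant term breaks additivity
```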

Rotation transformations in $\mathbb{R}^2$ are linear transformations.
Integration and differentiation are linear transformations.

Definition: Given a linear mapping $\mathcal{T}: V \rightarrow W$, the set of all vectors $x \in V$ such that $\mathcal{T}(x) = 0_W$ is called the null space of $\mathcal{T}$ and is denoted by $N(\mathcal{T})$. That is,

$$N(\mathcal{T}) := \{x \in V \ : \ \mathcal{T}(x) = 0_W\}$$

Definition: Given a linear mapping $\mathcal{T}: V \rightarrow W$, the set of all vectors $w \in W$ such that $w = \mathcal{T}(v)$ for some $v \in V$ is called the range of $\mathcal{T}$ and is denoted by $R(\mathcal{T})$. That is,

$$R(\mathcal{T}) := \{w \in W \ : \ w = \mathcal{T}(v) \ \text{for some} \ v \in V\}$$

Claim: For a given linear mapping $\mathcal{T}: V \rightarrow W$, $N(\mathcal{T})$ is a linear subspace of $V$.

Proof: Let $x_1,x_2 \in N(\mathcal{T})$ and $a \in F$. We show,

(S1). $x_1 + x_2 \in N(\mathcal{T})$
(S2). $ax_1 \in N(\mathcal{T})$

1- $\mathcal{T}(x_1 + x_2) = \mathcal{T}(x_1) + \mathcal{T}(x_2) = 0_W + 0_W = 0_W \implies x_1 + x_2 \in N(\mathcal{T})$
2- $\mathcal{T}(ax_1) = a\mathcal{T}(x_1) = a0_W = 0_W \implies ax_1 \in N(\mathcal{T}) \ \blacksquare$
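For a concrete matrix $A$, a basis for $N(A)$ can be computed numerically from the SVD: the right singular vectors whose singular values vanish span the null space. A minimal sketch (the `null_space` helper and the rank-1 example matrix are illustrative choices):

```python
import numpy as np

def null_space(A, tol=1e-10):
    """Orthonormal basis for N(A): the right singular vectors
    whose singular values are numerically zero."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # columns span the null space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so dim N(A) = 3 - 1 = 2
N = null_space(A)
print(N.shape[1])               # 2
print(np.allclose(A @ N, 0.0))  # True: every column is mapped to 0
```

Consistent with the claim, any linear combination of the columns of `N` is again mapped to zero, i.e. $N(A)$ is closed under addition and scaling.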


Claim: For a given linear mapping $\mathcal{T}: V \rightarrow W$, $R(\mathcal{T})$ is a subspace of $W$.

Proof: Let $w_1,w_2 \in R(\mathcal{T})$ and $a \in F$. We show,

(S1). $w_1 + w_2 \in R(\mathcal{T})$
(S2). $aw_1 \in R(\mathcal{T})$

Since $w_1, w_2 \in R(\mathcal{T})$, there exist $v_1, v_2 \in V$ with $w_1 = \mathcal{T}(v_1)$ and $w_2 = \mathcal{T}(v_2)$.

1- $w_1 + w_2 = \mathcal{T}(v_1) + \mathcal{T}(v_2) = \mathcal{T}(v_1 + v_2) \implies w_1 + w_2 \in R(\mathcal{T})$
2- $aw_1 = a\mathcal{T}(v_1) = \mathcal{T}(av_1) \implies aw_1 \in R(\mathcal{T}) \ \blacksquare$

Definition: A linear transformation $\mathcal{T}: V \rightarrow W$ is called one-to-one if $x_1 \neq x_2$ implies $\mathcal{T}(x_1) \neq \mathcal{T}(x_2)$ for all $x_1,x_2 \in V$.


Theorem: Let $\mathcal{T}: V \rightarrow W$ be a linear transformation. Then $\mathcal{T}$ is one-to-one if and only if $N(\mathcal{T}) = \{0_V\}$.

Proof: Since it is an if and only if statement, we prove both directions.

(Backward direction) Assume that $N(\mathcal{T}) = \{0_V\}$ and $\mathcal{T}(x_1) = \mathcal{T}(x_2)$ for some $x_1,x_2 \in V$. Then,
$$\mathcal{T}(x_1) - \mathcal{T}(x_2) = 0_W$$
$$\mathcal{T}(x_1 - x_2) = 0_W$$
$$x_1 - x_2 \in N(\mathcal{T})$$
$$x_1 - x_2 = 0_V$$
$$x_1 = x_2$$
so $\mathcal{T}$ is one-to-one.

(Forward direction) Assume that $\mathcal{T}$ is one-to-one and let $x \in N(\mathcal{T})$, so $\mathcal{T}(x) = 0_W$. By linearity $\mathcal{T}(0_V) = 0_W$ as well, so one-to-oneness forces $x = 0_V$. Hence $N(\mathcal{T}) = \{0_V\}$. $\blacksquare$

Definition: A linear transformation $\mathcal{T}: V \rightarrow W$ is called onto if $R(\mathcal{T}) = W$; otherwise, if $R(\mathcal{T}) \subset W$ strictly, $\mathcal{T}$ is called into.


Example: Let $V := \{f:[0, 1] \rightarrow \mathbb{R} : f \text{ is integrable}\}$. A transformation $\mathcal{A}: V \rightarrow \mathbb{R}$ is defined as

$$\mathcal{A}(f) = \int_{0}^{1} f(s)\,ds$$

Is $\mathcal{A}$ one-to-one?

Solution: It is unlikely that integration is one-to-one, since distinct functions can integrate to the same value. We exploit the fact that a nonzero function can integrate to zero.

Let $f(s) = 2s-1$. Then,

$$\mathcal{A}(f) = \int_{0}^{1} (2s-1)\,ds = \left[s^2-s\right]_{0}^{1} = 0$$

So $\mathcal{A}(0) = 0$ and $\mathcal{A}(f) = 0$ for some $f \neq 0$; that is, $N(\mathcal{A}) \neq \{0_V\}$.
Hence $\mathcal{A}$ is not one-to-one.
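This witness is easy to check numerically; the sketch below approximates the integral with a midpoint rule (the helper `A` and the grid size are illustrative choices, not an exact integral):

```python
import numpy as np

def A(f, n=100_000):
    """Approximate the integral of f over [0, 1] by the midpoint rule."""
    s = (np.arange(n) + 0.5) / n
    return np.mean(f(s))

f = lambda s: 2 * s - 1                  # a nonzero function ...
print(abs(A(f)) < 1e-9)                  # True: ... that integrates to 0
print(abs(A(lambda s: 0 * s)) < 1e-9)    # True: same image as the zero function
```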

Moreover,

let $f(s) = a$ for an arbitrary $a \in \mathbb{R}$. Then,

$$\mathcal{A}(f) = \int_{0}^{1} a\,ds = \left[as\right]_{0}^{1} = a$$

Since every $a \in \mathbb{R}$ is attained, $\mathcal{A}$ is onto.


Matrix Representations

Definition: Let $\mathcal{T}: V \rightarrow W$ be a linear transformation with $\dim(V) = n$ and $\dim(W) = m$. Let $\mathcal{B} = \{v_1,v_2,...,v_n\}$ be a basis for $V$ and $\mathcal{C} = \{w_1,w_2,...,w_m\}$ be a basis for $W$. Then the matrix representation of $\mathcal{T}$ with respect to $\mathcal{B}$ and $\mathcal{C}$ is the $m \times n$ matrix

$$[\mathcal{T}]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

whose entries are determined by

$$\mathcal{T}(v_j) = \sum_{i=1}^{m} a_{ij}w_i \quad \text{for} \ j = 1,2,...,n$$

Remark: The matrix representation of $\mathcal{T}$ with respect to $\mathcal{B}$ and $\mathcal{C}$ is denoted by $[\mathcal{T}]_{\mathcal{B}}^{\mathcal{C}}$.

In coordinates, the transformation now reads

$$[w]_{\mathcal{C}} = [\mathcal{T}]_{\mathcal{B}}^{\mathcal{C}} [v]_{\mathcal{B}}$$

A formal procedure to obtain the matrix representation of a linear transformation

  1. Take each basis vector $v_j$ in $\mathcal{B}$.
  2. Apply $\mathcal{A}$ to $v_j$: $\mathcal{A}(v_j)$.
  3. Express the result in terms of the basis vectors in $\mathcal{C}$: $\mathcal{A}(v_j) = \sum_{i=1}^{m} a_{ij}w_i$.
  4. The $j$th column of $[\mathcal{A}]_{\mathcal{B}}^{\mathcal{C}}$ is the vector $\begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{bmatrix}$.
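These four steps translate directly into code once every vector is identified with its coordinate vector in some reference basis: step 3 becomes a linear-system solve against the matrix whose columns are the $\mathcal{C}$ vectors. A minimal sketch, reusing the earlier map $(\alpha_1, \alpha_2) \mapsto (\alpha_1, \alpha_1 + \alpha_2)$ with canonical bases as an illustrative choice:

```python
import numpy as np

def matrix_rep(T, B, C):
    """Matrix of a linear map T with respect to bases B (domain) and
    C (codomain), given as lists of coordinate vectors.  Column j is
    the coordinate vector of T(v_j) in the basis C (steps 1-4)."""
    C_cols = np.column_stack(C)
    return np.column_stack([np.linalg.solve(C_cols, T(v)) for v in B])

T = lambda x: np.array([x[0], x[0] + x[1]])
B = C = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(matrix_rep(T, B, C))
# [[1. 0.]
#  [1. 1.]]
```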

Example: Let $V = \{\text{polynomials of degree at most } 3\}$ and $W = \{\text{polynomials of degree at most } 2\}$.
Let $\mathcal{A}: V \rightarrow W$ be defined as

$$\mathcal{A}(p(s)) = \frac{dp(s)}{ds}$$

Find the matrix representation of $\mathcal{A}$ with respect to the bases $\mathcal{B} = \{1, 1+s, 1+s+s^2, 1+s+s^2+s^3\}$ and $\mathcal{C} = \{1, 1+s, 1+s+s^2\}$.

Solution: Write $\mathcal{B} = \{v_1, v_2, v_3, v_4\}$ and $\mathcal{C} = \{w_1, w_2, w_3\}$. Then,

$$\begin{align*} \mathcal{A}(v_1) &= \frac{d}{ds}(1) = 0 = 0w_1 + 0w_2 + 0w_3 \\ \mathcal{A}(v_2) &= \frac{d}{ds}(1+s) = 1 = 1w_1 + 0w_2 + 0w_3 \\ \mathcal{A}(v_3) &= \frac{d}{ds}(1+s+s^2) = 1+2s = -1w_1 + 2w_2 + 0w_3 \\ \mathcal{A}(v_4) &= \frac{d}{ds}(1+s+s^2+s^3) = 1+2s+3s^2 = -1w_1 - 1w_2 + 3w_3 \end{align*}$$
$$[\mathcal{A}]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & -1 \\ 0 & 0 & 0 & 3 \end{bmatrix}$$

Note that with respect to the canonical bases $\bar{\mathcal{B}} = \{1, s, s^2, s^3\}$ and $\bar{\mathcal{C}} = \{1, s, s^2\}$, the same transformation is represented by

$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix}$$

which is obtained from $[\mathcal{A}]_{\mathcal{B}}^{\mathcal{C}}$ by a change of basis, worked out in the Change of Basis section.
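The computation of $[\mathcal{A}]_{\mathcal{B}}^{\mathcal{C}}$ above can be reproduced numerically by representing each polynomial by its coefficient vector and solving for coordinates in $\mathcal{C}$; a sketch with NumPy:

```python
import numpy as np

# domain basis B as coefficient vectors [a0, a1, a2, a3]
B = [np.array([1., 0., 0., 0.]), np.array([1., 1., 0., 0.]),
     np.array([1., 1., 1., 0.]), np.array([1., 1., 1., 1.])]
# codomain basis C: columns are 1, 1+s, 1+s+s^2 in coordinates [a0, a1, a2]
C = np.array([[1., 1., 1.],
              [0., 1., 1.],
              [0., 0., 1.]])

def deriv(p):
    # d/ds of a0 + a1 s + a2 s^2 + a3 s^3, as a vector in W
    return np.array([p[1], 2 * p[2], 3 * p[3]])

# column j: coordinates of A(v_j) in the basis C
A = np.column_stack([np.linalg.solve(C, deriv(v)) for v in B])
print(A)
# [[ 0.  1. -1. -1.]
#  [ 0.  0.  2. -1.]
#  [ 0.  0.  0.  3.]]
```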

Example: Let $V = W = \mathbb{R}^{2 \times 2}$ and $\mathcal{A}: V \rightarrow V$ be defined as

$$\mathcal{A}(x) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}x + x \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$$

Find the matrix representation of $\mathcal{A}$ with respect to the bases

$$\mathcal{B} = \left\{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\right\}$$

and

$$\mathcal{C} = \left\{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\right\}$$

Solution: Write $\mathcal{B} = \{v_1, v_2, v_3, v_4\}$ and $\mathcal{C} = \{w_1, w_2, w_3, w_4\}$. Then,

$$\begin{align*} \mathcal{A}(v_1) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ -1 & 0 \end{bmatrix} + \begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} = 1w_1 + 0w_2 - 1w_3 + 0w_4 \\ \mathcal{A}(v_2) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = 1w_1 + 0w_2 + 1w_3 - 1w_4 \\ \mathcal{A}(v_3) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = 1w_1 + 0w_2 + 1w_3 - 1w_4 \\ \mathcal{A}(v_4) &= \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = -1w_1 + 0w_2 + 1w_3 + 0w_4 \end{align*}$$
$$[\mathcal{A}]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} 1 & 1 & 1 & -1 \\ 0 & 0 & 0 & 0 \\ -1 & 1 & 1 & 1 \\ 0 & -1 & -1 & 0 \end{bmatrix}$$
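The same representation can be computed numerically by flattening each $2 \times 2$ matrix into a vector in $\mathbb{R}^4$, so that step 3 of the procedure again becomes a linear solve; a sketch:

```python
import numpy as np

J = np.array([[0., 1.], [-1., 0.]])
A_map = lambda X: J @ X - X @ J   # equals J X + X (-J), the map above

# bases as 2x2 matrices
B = [np.array([[1., 0.], [0., 0.]]), np.array([[0., 1.], [0., 0.]]),
     np.array([[0., 0.], [1., 0.]]), np.array([[0., 0.], [0., 1.]])]
C = [np.array([[1., 0.], [0., 0.]]), np.array([[1., 1.], [0., 0.]]),
     np.array([[1., 1.], [1., 0.]]), np.array([[1., 1.], [1., 1.]])]

# flatten matrices row-major into R^4; columns of C_cols are the C basis
C_cols = np.column_stack([w.ravel() for w in C])
rep = np.column_stack([np.linalg.solve(C_cols, A_map(v).ravel()) for v in B])
print(rep)
# [[ 1.  1.  1. -1.]
#  [ 0.  0.  0.  0.]
#  [-1.  1.  1.  1.]
#  [ 0. -1. -1.  0.]]
```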

Change of Basis

Definition: Let $\mathcal{B} = \{v_1,v_2,...,v_n\}$ and $\bar{\mathcal{B}} = \{\bar{v}_1,\bar{v}_2,...,\bar{v}_n\}$ be two bases for a linear space $V$. The change of basis matrix from $\bar{\mathcal{B}}$ to $\mathcal{B}$ is the $n \times n$ invertible matrix $P$ such that

$$[v]_{\mathcal{B}} = P [v]_{\bar{\mathcal{B}}} \quad \forall v \in V$$

Suppose a linear transformation is represented by $A$ with respect to $\mathcal{B}$, $\mathcal{C}$ and by $\bar{A}$ with respect to $\bar{\mathcal{B}}$, $\mathcal{C}$:

$$[w]_{\mathcal{C}} = A [v]_{\mathcal{B}}, \qquad [w]_{\mathcal{C}} = \bar{A} [v]_{\bar{\mathcal{B}}}$$

Since a change of basis is itself a linear transformation,

$$[v]_{\mathcal{B}} = P [v]_{\bar{\mathcal{B}}}$$
$$[w]_{\mathcal{C}} = A P [v]_{\bar{\mathcal{B}}}$$

so $\bar{A} = AP$.

From the codomain perspective, with $Q$ the change of basis matrix from $\bar{\mathcal{C}}$ to $\mathcal{C}$,

$$[w]_{\mathcal{C}} = Q [w]_{\bar{\mathcal{C}}}$$
$$[w]_{\bar{\mathcal{C}}} = Q^{-1} A [v]_{\mathcal{B}}$$
$$[w]_{\bar{\mathcal{C}}} = Q^{-1} A P [v]_{\bar{\mathcal{B}}}$$

Example:

$$\begin{align*} V &= \{\text{polynomials of degree at most } 3\} \\ W &= \{\text{polynomials of degree at most } 2\} \\ \mathcal{B} &= \{1, 1+s, 1+s+s^2, 1+s+s^2+s^3\} \\ \mathcal{C} &= \{1, 1+s, 1+s+s^2\} \\ A &= \begin{bmatrix} 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & -1 \\ 0 & 0 & 0 & 3 \end{bmatrix} \\ \bar{\mathcal{B}} &= \{1, s, s^2, s^3\} \end{align*}$$

Solution: First we handle the change of basis in the domain, from $\bar{\mathcal{B}}$ to $\mathcal{B}$. More precisely, we seek $\bar{A}$ such that

$$[w]_{\mathcal{C}} = \bar{A} [v]_{\bar{\mathcal{B}}}$$

and since $[v]_{\mathcal{B}} = P [v]_{\bar{\mathcal{B}}}$,

$$[w]_{\mathcal{C}} = A P [v]_{\bar{\mathcal{B}}}, \qquad \text{i.e.} \quad \bar{A} = AP$$

In order to find $P$ we write the basis vectors of $\bar{\mathcal{B}}$ in terms of $\mathcal{B}$.

$$\begin{align*} 1 &= 1(1) + 0(1+s) + 0(1+s+s^2) + 0(1+s+s^2+s^3) \\ s &= -1(1) + 1(1+s) + 0(1+s+s^2) + 0(1+s+s^2+s^3) \\ s^2 &= 0(1) - 1(1+s) + 1(1+s+s^2) + 0(1+s+s^2+s^3) \\ s^3 &= 0(1) + 0(1+s) - 1(1+s+s^2) + 1(1+s+s^2+s^3) \end{align*}$$
$$P = \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Now $\bar{A}$ is equal to

$$\bar{A} = AP = \begin{bmatrix} 0 & 1 & -1 & -1 \\ 0 & 0 & 2 & -1 \\ 0 & 0 & 0 & 3 \end{bmatrix} \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 & -2 & 0 \\ 0 & 0 & 2 & -3 \\ 0 & 0 & 0 & 3 \end{bmatrix}$$

Next we change the basis in the codomain to the canonical basis $\bar{\mathcal{C}} = \{1, s, s^2\}$, keeping the canonical basis $\bar{\mathcal{B}}$ in the domain. From $[w]_{\mathcal{C}} = Q [w]_{\bar{\mathcal{C}}}$,

$$[w]_{\bar{\mathcal{C}}} = Q^{-1} [w]_{\mathcal{C}} = Q^{-1} A P [v]_{\bar{\mathcal{B}}}$$

In order to find $Q^{-1}$ in a single step, we write the basis vectors of $\mathcal{C}$ in terms of $\bar{\mathcal{C}}$.

$$\begin{align*} 1 &= 1(1) + 0(s) + 0(s^2) \\ 1+s &= 1(1) + 1(s) + 0(s^2) \\ 1+s+s^2 &= 1(1) + 1(s) + 1(s^2) \end{align*}$$
$$Q^{-1} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$

As the final steps,

$$[w]_{\bar{\mathcal{C}}} = Q^{-1} \bar{A} [v]_{\bar{\mathcal{B}}}$$
$$Q^{-1} \bar{A} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 & -2 & 0 \\ 0 & 0 & 2 & -3 \\ 0 & 0 & 0 & 3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix}$$
$$[w]_{\bar{\mathcal{C}}} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix} [v]_{\bar{\mathcal{B}}}$$
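The whole change-of-basis computation reduces to a few matrix products, so it can be verified numerically in one line; a sketch:

```python
import numpy as np

A = np.array([[0., 1., -1., -1.],
              [0., 0.,  2., -1.],
              [0., 0.,  0.,  3.]])
P = np.array([[1., -1.,  0.,  0.],
              [0.,  1., -1.,  0.],
              [0.,  0.,  1., -1.],
              [0.,  0.,  0.,  1.]])
Q_inv = np.array([[1., 1., 1.],
                  [0., 1., 1.],
                  [0., 0., 1.]])

# representation with respect to the canonical bases: Q^{-1} A P
A_canonical = Q_inv @ A @ P
print(A_canonical)
# [[0. 1. 0. 0.]
#  [0. 0. 2. 0.]
#  [0. 0. 0. 3.]]
```

The result is exactly the familiar matrix of $\frac{d}{ds}$ on $\{1, s, s^2, s^3\} \rightarrow \{1, s, s^2\}$, which confirms the hand computation.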

Given the matrix representation of a linear transformation $\mathcal{A}:V \rightarrow W$ with respect to bases $\mathcal{B}$ and $\mathcal{C}$, one can draw the following diagram.

```mermaid
flowchart LR
    v{"v"} -- "𝒜" --> w{"w"}
    B["[v]_B"] -.- v
    Bbar["[v]_B̄"] -.- v
    w -.- F["[w]_C"]
    w -.- E["[w]_C̄"]
    B -- "A" --> F
    Bbar -- "Q⁻¹AP" --> E
    B <-- "P" --> Bbar
    E <-- "Q" --> F
```

#EE501 - Linear Systems Theory at METU